
PLOS Computational Biology

Public Library of Science (PLoS)

Preprints posted in the last 90 days, ranked by how well they match PLOS Computational Biology's content profile, based on 1633 papers previously published here. The average preprint has a 1.32% match score for this journal, so anything above that is already an above-average fit.

1
Push-and-pull protein dynamics leads to log-normal synaptic sizes and probabilistic multi-spine plasticity

Petkovic, J.; Eggl, M.; Pathirana, D.; Chater, T. E.; Hasenauer, J.; Rizzoli, S.; Tchumatchenko, T.

2026-01-29 neuroscience 10.64898/2026.01.29.702571 medRxiv
Top 0.1%
61.9%

A typical neuron receives thousands of inputs and is able to adapt the strength of its synapses to store new information and meet ongoing computational demands. The synaptic response to plasticity induction is stochastic and spatially structured but is traditionally described by deterministic models representing the "average" dynamics. Growing experimental evidence indicates that not only the stimulation protocol determines the plasticity outcome but that the initial synaptic sizes, their fluctuations, and the spatial competition for the plasticity-relevant proteins play a decisive role. This probabilistic perspective makes it hard to predict the fate of a given synapse and requires a conceptual shift from a single synapse view to a probabilistic multi-spine competitive process where the plasticity needs and the available resources are considered together. Here, we propose a data-driven modeling framework able to predict collective plasticity outcomes along a dendrite based on the initial size, the number, and the spatial distance between simultaneously stimulated synapses. Our data analysis reveals a log-normal distribution of protein numbers for many plasticity-mediating proteins and shows that this log-normal protein allocation constrains and controls the collective plasticity outcome across multiple stimulated and non-stimulated synapses while preserving a global size distribution. Our findings highlight how local stochastic processes and global protein allocation rules give rise to synaptic plasticity outcomes, offering a new framework to understand and predict dendritic computation.
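The log-normal size statistics central to this framework are easy to illustrate. Below is a minimal, self-contained Python sketch of drawing log-normally distributed synaptic sizes; the parameter values (mu, sigma) are illustrative placeholders, not the fitted values from the paper:

```python
import math
import random

def sample_synaptic_sizes(n, mu=0.0, sigma=0.5, seed=0):
    """Draw n synaptic sizes from a log-normal distribution.

    mu and sigma parameterize the underlying normal distribution of
    log-sizes; both values here are illustrative, not fitted to data.
    """
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

sizes = sample_synaptic_sizes(100_000)
log_sizes = [math.log(s) for s in sizes]
mean_log = sum(log_sizes) / len(log_sizes)
# The log of log-normal samples is normally distributed, so the
# empirical mean of the log-sizes should sit near mu = 0.
```

A distribution of this form is heavy-tailed: most synapses are small, while a few are much larger than the mean, which is the regime the abstract's "global size distribution" refers to.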

2
The transfer function as a tool to reduce morphological models into point-neuron models

Daou, M.; Jovanic, T.; Destexhe, A.

2026-03-24 neuroscience 10.64898/2026.03.20.713213 medRxiv
Top 0.1%
55.6%

Building a simple model that precisely and functionally characterizes a neuron is a challenging and important task when selecting a concise and computationally efficient model. However, this type of work has only been done for subthreshold properties of neurons. Here, we take a different perspective and suggest a method to obtain point-neuron models from morphologically-detailed models with dendrites. To do this, we focus on the functional characterization of the neuron response under in vivo conditions, and compute the transfer function of the detailed model. The parameters of this transfer function, in terms of mean voltage, voltage standard deviation and correlation time, can be used to compute the "best" point-neuron model that generates a transfer function very close to that of the morphologically-detailed model. We illustrate this approach for two very different neuronal morphologies, one from Drosophila larvae and one from mammals. In conclusion, this approach provides a tool to generate point-neuron models from detailed models, based on a functional characterization of the neuron response. Significance Statement: This study provides a new computational method to reduce morphological models into point-neuron models. To do so, we calculate the transfer function parameters, i.e., the voltage standard deviation, the mean voltage and the correlation time, of the morphological model and fit a point-neuron model to these data. Here, we successfully apply this approach for two very different neuron morphologies, a Drosophila neuron and a rat motoneuron.

3
Nonparametric Bayesian Contextual Control: Integrating Automatisation and Prior Knowledge for Stable Adaptive Behaviour

Hranova, S.; Kiebel, S.; Smolka, M. N.; Schwöbel, S.

2026-02-28 neuroscience 10.64898/2026.02.26.708143 medRxiv
Top 0.1%
51.4%

Humans have a remarkable ability to act efficiently and accurately in familiar situations while remaining flexible in novel circumstances. Nonparametric contextual inference has been proposed as a computational principle that can model how agents achieve flexible yet stable behaviour in dynamic and possibly unknown environments. However, it remains an open question how humans learn, deploy and reuse stable contextual task representations so efficiently. To address this question, we propose the nonparametric Bayesian Contextual Control (NP-BCC) model, which integrates nonparametric contextual learning with two well-established cognitive mechanisms: repetition-based automatisation and schema-like prior knowledge. These two mechanisms are assumed to support behavioural stability and facilitate novel task acquisition. Simulations in dynamic multi-armed bandit tasks of increasing difficulty illustrate how the NP-BCC can acquire and reuse contextual task representations, with the proposed mechanisms operating in the intended, functionally meaningful manner. Specifically, we show via simulations that automatisation not only enhances task performance but also stabilizes contextual inference and structure learning, while structured prior knowledge accelerates the acquisition of novel contexts. We discuss the implications of our findings for computational accounts of adaptive behaviour and contextual learning, and outline directions for future empirical work, including investigations of context-dependent behavioural dysregulation relevant to conditions such as substance use disorders.
Author summary: People are very good at repeating well-learned actions in familiar situations, but they can also quickly adjust their behaviour when circumstances change. How the brain balances stability and flexibility is still not fully understood. There is growing evidence that the brain organizes experience into different "contexts", which are mental representations of encountered situations. Computational models based on this idea can in principle reproduce flexible behaviour, but they often become unstable in complex environments. To improve stability, we borrow two simple strategies from everyday human behaviour. First, people tend to repeat actions that have worked well before. Second, when facing something new, they often reuse strategies from similar past situations. Using simulations, we show that combining these strategies with context-based learning produces more reliable behaviour in the model. Prior experience helps the model understand new situations more quickly, while repeated actions help stabilise behaviour once a situation becomes familiar. Taken together, our findings show how such mechanisms can give rise to both flexible and stable behaviour in the model.
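Nonparametric contextual learning of the kind described above is commonly built on a Chinese restaurant process (CRP) prior over context assignments. The sketch below shows a plain CRP draw as a generic illustration under that assumption; it is not the NP-BCC model itself, whose full machinery also includes automatisation and schema-like priors:

```python
import random

def crp_assignments(n_observations, alpha=1.0, seed=0):
    """Assign observations to contexts via a Chinese restaurant process.

    alpha is the concentration parameter: larger values open new
    contexts more readily. This is a standard nonparametric prior,
    shown purely for illustration.
    """
    rng = random.Random(seed)
    counts = []          # counts[k] = observations already in context k
    assignments = []
    for i in range(n_observations):
        total = i + alpha
        # Join existing context k with probability counts[k] / total;
        # open a new context with probability alpha / total.
        r = rng.random() * total
        for k, c in enumerate(counts):
            if r < c:
                counts[k] += 1
                assignments.append(k)
                break
            r -= c
        else:
            counts.append(1)
            assignments.append(len(counts) - 1)
    return assignments
```

Because contexts are numbered in order of creation, the number of distinct contexts grows slowly (roughly logarithmically) with the number of observations, which is what lets such models add contexts only when the data demand it.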

4
Uncertainty Aware Decision Support with Computationally Expensive Simulation Models: A Case Study of HIV Intervention Scenarios

Fadikar, A.; Hotton, A.; de Lima, P. N.; Vardavas, R.; Collier, N.; Jia, K.; Rimer, S.; Khanna, A.; Schneider, J.; Ozik, J.

2026-04-17 hiv aids 10.64898/2026.04.15.26350970 medRxiv
Top 0.1%
47.0%

Detailed agent-based simulations are increasingly used to support policy decisions, but their computational cost and complex uncertainty structure make systematic scenario analysis challenging. We present a data-driven, uncertainty-aware decision support (DDUADS) workflow for using stochastic simulation models as decision-support tools under limited computational budgets. The approach combines several established techniques (sensitivity screening, Bayesian calibration using simulation-based inference, and multi-surrogate model integration for translational efficiency) into a coherent pipeline that enables uncertainty-aware policy analysis. Rather than producing a single baseline, the calibration stage yields a posterior distribution over plausible model parameterizations, allowing flexible, uncertainty-aware forward projections. We demonstrate the DDUADS workflow on the INFORM-HIV agent-based model of HIV transmission in Chicago to evaluate potential disruptions in antiretroviral therapy (ART) and pre-exposure prophylaxis (PrEP) use. While the specific application is HIV modeling, the challenges and techniques described here arise in other simulation studies and can be applied to decision support in other domains.

5
A Comparison of Mechanisms Driving Lesion Outcomes during Lung Tumor and Tuberculosis Granuloma Formation

Michael, C. T.; Budak, M.; Kirschner, D.

2026-02-27 cancer biology 10.64898/2026.02.25.708029 medRxiv
Top 0.1%
44.8%

Small cell lung cancer (SCLC) and tuberculosis (TB) are both deadly diseases that present with spatially complex lung lesions. These lesions share many similarities, including several key spatial interactions between T cells and macrophages. Both SCLC and TB present with significant heterogeneity, both in terms of progression of disease and responses to treatment; current experimental methods have few tools to investigate the spatiotemporal evolution of these lesions within human lungs. We have applied our computational agent-based model, GranSim, to extensively study heterogeneity of TB granuloma scale formation, infection outcome and treatment in detail. We introduce TumorSim, an analogous agent-based model designed to understand the heterogeneity of SCLC lung tumors. TumorSim mechanistically and spatio-temporally captures immune-tumor interactions, many of which are well-studied in isolation, including cytokine-based recruitment of adaptive cells and PD1/PDL1-based inhibition of cytotoxic T-cell activity. Drawing from known lung immunology as well as literature on lung tumor responses, we define and explore a wide set of parameters to characterize TumorSim behavior using global sensitivity analysis. We compare factors that drive dynamics of both SCLC tumors and TB granulomas. As model validation, sensitivity analysis captures several well-known correlates of improved SCLC outcomes including macrophage-mediated cytotoxic T-cell recruitment. Surprisingly, both models predict a two-phase formation process occurring with an abrupt change in tumor/granuloma dynamics upon arrival of adaptive immune cells into the lung from lung-draining lymph nodes. Simulations suggest that while CCL5 is associated with improved tumor control later during tumor growth, CCL5 plays a pro-tumor role early during tumor growth by recruiting regulatory T cells. 
We also find that, similar to virtual TB granulomas, TumorSim tumors are increased in volume when immunosuppressive mechanisms outweigh pro-inflammatory responses. This novel tumor model can serve as a basis for future studies on lung tumor-immune dynamics to study both immunotherapeutics and anti-cancer drugs.

6
Biologically informed genetic data transformations improve multi-omic comorbidity prediction in people with HIV

Ryan, B.; Thorball, C. W.; Ait Oumelloul, M.; Kouyos, R.; Tarr, P. E.; Fellay, J.

2026-03-10 hiv aids 10.64898/2026.03.09.26347570 medRxiv
Top 0.1%
44.6%

Coronary artery disease (CAD) and chronic kidney disease (CKD) are in part genetically determined and are associated with various omics layers. Methods for integrating genomics data with omics profiles remain to be standardised. This study evaluates biological data transformations to optimise the integration of genomics with other omics for comorbidity prediction in people with HIV (PWH). We trained linear and deep-learning single-omic and multi-omic models on two cohorts of PWH with genotype and one other omics data available. 436 CAD cases and 166 CKD cases were evenly split across train/validation/test cohorts. Multi-omic integration evaluated feature concatenation against encoder-based architectures, and performance was estimated via five-fold cross-validation on fixed patient splits, reporting mean accuracy and standard errors. Genotype data was represented in four ways: (i) raw SNP genotype matrices; (ii) principal component (PCA) embeddings; (iii) polygenic risk scores (PRS); and (iv) AlphaGenome-derived gene-level impact scores. Each genotype representation was compared individually and when integrated in a multi-omics model. The results demonstrate that biologically informed genomic transformations improve prediction in multi-omics models. In both classification tasks, integrating raw SNPs (CAD accuracy = 0.55 ± 0.03; CKD accuracy = 0.63 ± 0.01) or genotype PCs (CAD accuracy = 0.54 ± 0.03; CKD accuracy = 0.62 ± 0.03) with other omics reduced performance relative to the best corresponding single-omics models. By contrast, PRS (CAD accuracy = 0.61 ± 0.03; CKD accuracy = 0.65 ± 0.02) and AlphaGenome (CAD accuracy = 0.57 ± 0.03; CKD accuracy = 0.67 ± 0.02) improved accuracy. As multi-omics analyses become more prominent, methods that integrate genomics effectively without requiring large cohorts will become increasingly valuable; here, we highlight two such approaches.

7
DENcode: A model for haplotype-informed transmission probability of dengue virus

Maduranga, S.; Arroyo, B. M. V.; Sigera, C.; Weeratunga, P.; Fernando, D.; Rajapakse, S.; Lloyd, A. R.; Bull, R. A.; Stone, H.; Rodrigo, C.

2026-02-27 bioinformatics 10.64898/2026.02.26.708194 medRxiv
Top 0.1%
43.2%

Dengue virus transmission networks are often only partially resolved, due to gaps in sampling, unobserved mosquito-mediated transmission, and the use of methods (phylogenetics) that describe evolutionary relatedness but not explicit, probabilistic transmission links between individual infections. We developed DENcode, a framework to estimate the relative likelihood of vector-mediated transmission between pairs of dengue cases by combining a temperature- and time-modulated epidemiological kernel, which captures the extrinsic incubation period and human infectiousness, with a phylogenetically informed genetic similarity kernel derived from patristic distances between viral haplotypes or consensus sequences. In validation with a real-life dataset of 90 dengue infections sampled from Colombo, Sri Lanka between 2017 and 2020 and sequenced to resolve within-host haplotypes, DENcode estimates were stable across 100 Monte Carlo iterations, yielding narrow credible intervals (median width <0.001) and consistent top-ranked transmission pairs. Sensitivity analyses using ablation experiments showed that removing either the genetic or epidemiological component substantially altered the distribution of linkage probabilities, indicating that both contribute meaningfully to the inferred transmission structure. Serotype-specific transmission networks constructed from pairwise linkage probabilities from DENcode were analysed using degree- and path-based centrality measures at probability thresholds of 0.1 and 0.5, revealing the relative importance of cases to disease transmission within the community. Haplotype-derived networks were more informative than consensus-based networks (3.6× and 1.6× more edges for DENV-2 and DENV-3, respectively). DENcode is a robust framework to explore dengue transmission within a community that outputs a network of transmission probabilities informed by pathogen genetic similarity and clinical epidemiological parameters.
Author summary: Tracing epidemics of dengue in settings where dengue transmission happens continuously poses many challenges, especially with limited availability of genomic surveillance. Here we introduce a model that uses genomic data together with time and location data to calculate the probability that two cases of dengue are related to each other. We evaluated the model using data from the Colombo dengue study (Sri Lanka, 2017-2020). We used haplotype-level sequences, which capture the viral variation within a human host, and consensus-level sequences, which average the data from a single human host into a single sequence. We constructed transmission probability networks for each dengue serotype and were able to identify patients who played key roles in the corresponding networks. We were able to show that this model is robust and will be a valuable tool in the context of dengue control.
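The two-kernel combination the abstract describes can be sketched in a few lines. This is a schematic reconstruction under stated assumptions (a Gaussian temporal kernel, exponential decay in genetic distance, and placeholder parameter values), not DENcode's actual fitted kernels:

```python
import math

def linkage_score(delta_t_days, genetic_distance,
                  serial_mean=17.0, serial_sd=6.0, genetic_scale=0.01):
    """Relative transmission-linkage score for a pair of dengue cases.

    Combines a temporal kernel on the interval between symptom onsets
    with a genetic-similarity kernel on patristic distance. All
    parameter values here are illustrative placeholders.
    """
    # Gaussian kernel on the generation interval between the two cases.
    z = (delta_t_days - serial_mean) / serial_sd
    temporal = math.exp(-0.5 * z * z)
    # Exponential decay in patristic (genetic) distance.
    genetic = math.exp(-genetic_distance / genetic_scale)
    return temporal * genetic
```

The score is maximal when the time gap matches the assumed serial interval and the sequences are identical; either a mismatched interval or genetic divergence pulls it toward zero, which is why ablating either component changes the inferred linkage structure.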

8
Stochastic optimal control simulations of walking: potential and perspective

D'Hondt, L.; Afschrift, M.; De Groote, F.

2026-03-20 systems biology 10.64898/2026.03.19.712839 medRxiv
Top 0.1%
42.0%

Human walking is intrinsically variable. For example, there is considerable stride to stride variability even when walking speed is constant. This variability is due to uncertainty in the sensorimotor system and the environment, and is shaped by both musculoskeletal dynamics (e.g. joint stiffness and damping originating from muscles) and the control strategy used to mitigate the effects of uncertainty. Yet, insight into how sensorimotor noise shapes walking variability is limited due to a lack of experimental methods to assess sensorimotor noise and control strategies during walking. Simulations that account for uncertainty can elucidate how sensorimotor noise affects movement variability but due to numerical challenges, accounting for sensorimotor noise is not common in simulations of walking. Existing simulations have hugely simplified musculoskeletal dynamics (e.g. no muscles), the control policy (e.g. pre-defined feedback loops), or sensorimotor noise sources (e.g. only motor noise). Here, we performed stochastic optimal control simulations of walking based on a model with 9 degrees of freedom and 18 muscles to study how the level of sensory and motor noise influences walking. We solved for feedforward muscle excitations and full-state time-varying feedback gains that minimised expected effort while generating periodic, and hence stable, gait patterns. To enable these simulations, we approximated the state distribution with a Gaussian and used an unscented transform to propagate the state covariance. Resulting optimisation problems were solved with direct collocation. Sensorimotor noise level had a small effect on the mean kinematics but shaped kinematic and muscle activity variability as well as expected effort. Although simulations underestimated the magnitude of experimental positional variability, they captured its structure. 
In agreement with experimental results, the control policy prioritised limiting variability of centre of mass kinematics and minimal swing foot clearance over limiting joint angle variability. Hence, our simulations suggest that effort minimisation underlies these observations. Author summary: When performing a movement multiple times, each repetition will be slightly different due to random disturbances in the neural signals used to control movement, i.e. sensorimotor noise. Because sensorimotor noise is difficult to measure inside the nervous system of a moving person, computer simulations are used to study movement control. Such simulations have shown that both sensorimotor noise and musculoskeletal mechanics determine how people control arm movements and standing. However, no simulations of walking have systematically evaluated how sensorimotor noise level influences walking kinematics, because such simulations pose computational challenges. Here, we proposed and used an approach for minimal-effort simulations of walking in the presence of uncertainty. We imposed forward speed and stability but not kinematics. We found that the level of sensorimotor noise had little effect on the mean movement but a strong effect on the variability and the expected effort. The control strategy prioritised reducing the variability of the centre of mass position and swing foot clearance over reducing the variability of individual joint angles, which is also observed in experiments. Interestingly, strict control of centre of mass position and foot clearance in our simulations emerged from minimising effort.
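The unscented-transform step used in this kind of covariance propagation can be illustrated in one dimension. A minimal scalar sketch follows (lam is a tuning constant; the paper's implementation propagates a full multidimensional state covariance and is considerably more involved):

```python
import math

def unscented_transform_1d(mu, var, f, lam=2.0):
    """Propagate a 1-D Gaussian (mean mu, variance var) through a
    nonlinear function f using the unscented transform.

    Sigma points are placed at the mean and at +/- sqrt((n+lam)*var);
    their images under f are reweighted to estimate the transformed
    mean and variance.
    """
    n = 1
    spread = math.sqrt((n + lam) * var)
    points = [mu, mu + spread, mu - spread]
    w0 = lam / (n + lam)
    wi = 1.0 / (2.0 * (n + lam))
    weights = [w0, wi, wi]
    ys = [f(x) for x in points]
    mean = sum(w * y for w, y in zip(weights, ys))
    variance = sum(w * (y - mean) ** 2 for w, y in zip(weights, ys))
    return mean, variance
```

For an affine f the transform is exact (mean and variance are recovered without error), and for smooth nonlinear f it is accurate to second order without requiring derivatives of f, which is what makes it attractive inside an optimal control problem.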

9
A Cohort-Based Global Sensitivity Benchmark of MRI-Derived Whole-Heart Electromechanical Models in Healthy Hearts

Rahmani, S.; Pouliopoulos, J.; W. C. Lee, A.; Barrows, R. K.; Solis-Lemus, J. A.; Strocchi, M.; Rodero, C.; Qayyum, A.; Lashkarinia, S.; Roney, C.; Augustin, C. M.; Plank, G.; Fatkin, D.; Jabbour, A.; Niederer, S. A.

2026-03-30 systems biology 10.64898/2026.03.27.714701 medRxiv
Top 0.1%
41.6%

Patient-specific four-chamber electromechanical models provide a physics-constrained framework for investigating whole-heart cardiac physiology and disease mechanisms. Identifying which model parameters impact whole-heart function is important for understanding cellular-, tissue-, and organ-scale determinants of cardiac performance and for calibrating patient-specific models. However, previous global sensitivity analyses of cardiac electromechanical models have typically been performed on a single heart, and systematic evaluation of how parameter influence compares across anatomically different subjects remains limited. We created four-chamber electromechanical models using cardiac MRI from five healthy subjects (n = 5). The models simulated atrial and ventricular cellular electrophysiology, calcium dynamics, and active contraction, with heterogeneous fibre orientation, transversely isotropic tissue mechanics, pericardial constraint, and a closed-loop cardiovascular system providing physiological boundary conditions. In total, 46 parameters described the integrated model. Using Gaussian process emulators, we performed multi-scale global sensitivity analysis to evaluate the relative contribution of model parameters to left and right atrial and ventricular function. Across all anatomies, the most influential parameters were systemic and pulmonary resistances, ventricular end-diastolic pressures, and the venous reference pressure, highlighting the dominant role of haemodynamic loading conditions in governing pressure- and volume-based outputs. A chamber-level analysis of atrioventricular coupling revealed a phase-dependent pattern. Atrial pressures were predominantly governed by global haemodynamic parameters (>90% of total sensitivity), atrial filling volumes showed substantial ventricular influence (≈40-55% across anatomies), and atrial end-systolic volumes were primarily determined by intrinsic atrial parameters (≈60-65%).
These patterns were consistent across subjects despite differences in anatomy. We show that, in healthy male subjects, inter-individual anatomical variation does not substantially change the ranking of dominant parameters. This work provides a repeatable modelling and sensitivity analysis framework and establishes a benchmark reference for whole-heart electromechanical modelling in healthy hearts. Author summary: Computational models of the heart can simulate cardiac physiology in unprecedented detail, but these models contain many parameters whose influence on predicted function is not fully understood. We built patient-specific four-chamber heart models from MRI scans of five healthy subjects and used statistical methods to systematically test how 46 model parameters affect simulated cardiac performance. Across all five subjects, we found that the haemodynamic loading parameters, including systemic and pulmonary vascular resistance, ventricular filling pressures, and the venous reference pressure, consistently had the greatest influence on the model outputs, regardless of differences in individual heart anatomy. This finding suggests that in healthy resting conditions, the boundary conditions of the cardiovascular system, rather than individual differences in heart geometry or electrical properties, are the primary drivers of whole-heart function. We also found a structured coupling pattern between the upper and lower heart chambers, where global haemodynamic parameters dominate atrial pressure regulation, ventricular mechanics shape atrial filling, and intrinsic atrial properties control atrial emptying. This work provides a benchmark dataset of five anatomically detailed heart models and a sensitivity analysis framework to guide calibration of future cardiac digital twin models.

10
Simulation of neurotransmitter release and its imaging by fluorescent sensors

Gretz, J.; Mohr, J. M.; Hill, B. F.; Andreeva, V.; Erpenbeck, L.; Kruss, S.

2026-03-25 neuroscience 10.64898/2026.03.23.707923 medRxiv
Top 0.1%
41.4%

Cells release signaling molecules such as neurotransmitters that diffuse through the extracellular space and bind to receptors. These signaling molecules can be detected by fluorescent sensors/probes to provide images of the signaling process. Such images are not equivalent to a concentration map because diffusion and sensor kinetics affect (convolute) them. Therefore, computational approaches are necessary to disentangle these contributions and allow interpretation of fluorescent sensor-based images. Here, we present a kinetic Monte Carlo framework (FLuorescence Imaging Kinetic Simulation, FLIKS) that simulates signaling molecules undergoing cellular release, stochastic diffusion and reversible binding to sensors in realistic cellular (2D or 3D) geometries. We apply it to model neurotransmitter (dopamine) release in synaptic clefts and for paracrine signaling by immune cells. We also show how sensor location, sensor kinetics and release location affect fluorescence images. For example, we show how sensor sensitivity depends on the distance from the synaptic cleft and changes when dopamine transporters (DAT) clear dopamine. The approach also allows comparison of the performance of membrane-bound (genetically encoded) sensors versus artificial sensors such as nanosensors placed outside, under, or around the cells. As an example, we also demonstrate how the images of catecholamine release by immune cells can be modeled and compared to experimental data to better understand the release pattern. This framework provides a quantitative basis for analyzing and interpreting fluorescent sensor imaging data.
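The core loop of a kinetic Monte Carlo scheme like this can be sketched compactly. The toy below (a point sensor, irreversible binding, placeholder parameters) only illustrates the release-diffuse-bind scheme; FLIKS itself handles reversible binding, realistic 2D/3D geometries, and the fluorescence readout:

```python
import math
import random

def simulate_release(n_molecules=200, n_steps=200, step=1.0,
                     sensor=(10.0, 0.0), bind_radius=3.0,
                     p_bind=0.5, seed=0):
    """Toy kinetic Monte Carlo run: molecules released at the origin
    random-walk in 2-D and bind irreversibly when they pass within
    bind_radius of a point sensor. All parameters are placeholders.
    """
    rng = random.Random(seed)
    free = [(0.0, 0.0)] * n_molecules
    bound = 0
    for _ in range(n_steps):
        still_free = []
        for x, y in free:
            # Isotropic fixed-length step (lattice-free random walk).
            angle = rng.uniform(0.0, 2.0 * math.pi)
            x += step * math.cos(angle)
            y += step * math.sin(angle)
            dx, dy = x - sensor[0], y - sensor[1]
            if dx * dx + dy * dy <= bind_radius ** 2 and rng.random() < p_bind:
                bound += 1        # captured by the sensor
            else:
                still_free.append((x, y))
        free = still_free
    return bound

captured = simulate_release()
```

Moving the sensor farther from the release site, or lowering p_bind, reduces the captured count, which is the toy analogue of the distance- and kinetics-dependence of sensor sensitivity described in the abstract.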

11
Functional distinction between ionic and electric ephaptic effects on neuronal firing dynamics

Hauge, E.; Saetra, M. J.; Einevoll, G.; Halnes, G.

2026-03-30 neuroscience 10.64898/2026.03.26.714388 medRxiv
Top 0.1%
40.8%

Neuronal activity alters extracellular ion concentrations and electric potentials. Ephaptic effects refer to the feedback influence that these extracellular changes can have on neuronal activity. While electric ephaptic effects occur on a fast timescale due to extracellular potential perturbations, ionic ephaptic effects are driven by slower, accumulative changes in ion concentrations. Among the previous computational studies of ephaptic effects, the vast majority have focused exclusively on electric effects, while ionic ephaptic effects have largely been neglected. In this work, we present an electrodiffusive computational framework consisting of two-compartment neurons that interact via a shared extracellular space. By accounting for both electric potentials and ion-concentration dynamics in a self-consistent manner, our framework enables us to explore the relative roles of electric and ionic ephaptic effects. Through numerical experiments, we demonstrate that ionic and electric ephaptic interactions play very different roles. While ionic ephaptic interactions increase population firing rates, electric ephaptic interactions primarily drive subtle shifts in spike timing. Furthermore, we show that these spike shifts cause the phase difference (the distance in spike times between a small collection of neurons) to converge to a stable, unique phase difference, which we coin the ephaptic intrinsic phase preference. Author summary: Neurons predominantly communicate through synapses: specialized contact points where a brief electrical signal, known as a spike or action potential, in one neuron influences another. Neurons generate these spikes by exchanging ions with the surrounding extracellular space. This way, spiking neurons alter extracellular ion concentrations and electric potentials. Since neurons are sensitive to such changes in their environment, they can also influence one another indirectly through the shared extracellular medium.
This form of non-synaptic interaction is known as ephaptic coupling. Most computational models of neuronal activity neglect ephaptic interactions, and those that include them typically consider only electric effects while ignoring ionic contributions. As a result, the relative roles of electric and ionic ephaptic effects remain poorly understood. Here, we introduce a computational framework that accounts for both mechanisms in a self-consistent way. Our results show a functional distinction: ionic ephaptic effects act slowly, regulating population firing rates, whereas electric ephaptic effects act on millisecond timescales and subtly shift spike timing. These shifts cause spike-time differences between neurons to converge to a stable value, a phenomenon we call ephaptic intrinsic phase preference.

12
Individual differences in artificial neural networks capture individual differences in human behavior

Fung, H.; Murty, N. A. R.; Rahnev, D.

2026-02-11 neuroscience 10.64898/2026.02.10.705061 medRxiv
Top 0.1%
40.0%

Human behavior differs substantially across individuals. While artificial neural networks (ANNs) are regarded as promising models of human perception, they are often assumed to lack such individual differences. Here, we demonstrate that multiple instances of the same ANN architecture exhibit substantial individual differences in behavior that mimic those observed in humans. We trained and tested 60 ANN instances from three architectures on a digit recognition task and found notable individual differences in overall accuracy, confidence, and response time (RT). Critically, these individual differences in ANN instances mapped consistently onto the individual differences produced by 60 humans performing the same task, with the mapping strength often approaching the human-to-human benchmark across all three behavioral metrics (accuracy, confidence, RT). The mapping generalized even across behavioral metrics: an ANN instance that aligned with an individual human on accuracy also aligned with the same individual on confidence and RT. These findings generalized to a more complex, 10-choice blurry object recognition task, though the human-ANN mapping was generally less robust than the human-human benchmark. Overall, these findings open the possibility of using ANN ensembles as computational proxies for probing the mechanisms underlying human variability.
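The instance-to-human mapping described here is, at its core, a comparison of behavioral profiles. The generic sketch below matches each human to the ANN instance whose profile correlates best; both the profile format and the matching rule are hypothetical illustrations, and the authors' actual procedure may differ:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two behavioral profiles, e.g. the
    per-condition accuracies of one ANN instance and one human."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def best_match(human_profile, ann_profiles):
    """Index of the ANN instance whose profile correlates most strongly
    with the given human profile (a hypothetical matching rule)."""
    return max(range(len(ann_profiles)),
               key=lambda i: pearson(human_profile, ann_profiles[i]))
```

Comparing the strength of such human-to-ANN correlations against the human-to-human correlations on the same profiles is one simple way to quantify the "human-to-human benchmark" the abstract refers to.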

13
From low to high transmission: Diversity-dependent responses of Plasmodium falciparum population structure to transmission intensity

Suarez-Salazar, D.; Corredor, V.; Santos-Vega, M.

2026-04-08 genetics 10.64898/2026.04.07.717068 medRxiv
Top 0.1%
39.7%

Genetic surveillance is increasingly used to track malaria transmission, yet genomic metrics can respond nonlinearly to changes in transmission intensity and depend on the diversity already present in the parasite population. Here, we present a stochastic agent-based model of human-mosquito transmission that integrates SEIS-like epidemiological dynamics with within-host Plasmodium falciparum haplotype dynamics. By varying the maximum mosquito biting rate and the initial parasite diversity, we examine how transmission intensity and standing diversity jointly shape mixed infections, recombination, and long-term population structure across a continuous transmission gradient. Our study revealed a sequential pattern in which increasing biting intensity first increases infection prevalence and multiplicity of infection, then expands opportunities for outcrossing, and only thereafter increases effective recombination and recombinant haplotype generation. These responses are strongest in low- to intermediate-transmission settings and tend to plateau at higher transmission levels. Initial population diversity constrains the amount of diversity that can be maintained and the magnitude of recombination output, while temporal trajectories show that haplotype evenness can pass through transient non-equilibrium phases before stabilizing. Together, these results show that the structure of the parasite population is shaped not by transmission intensity alone but by its interaction with standing genetic diversity. Furthermore, this study helps clarify when and how genomic metrics reliably reflect transmission conditions across heterogeneous malaria settings.

14
The resource-rational dynamics of evidence accumulation

Fang, M.; Mao, J.; Donner, T. H.; Stocker, A. A.

2026-04-20 animal behavior and cognition 10.64898/2026.04.15.718716 medRxiv
Top 0.1%
39.3%

Evidence accumulation is a fundamental aspect of human decision-making. However, how the precise temporal structure of evidence shapes the accumulation process has not been systematically studied. As a result, current understanding of evidence accumulation remains largely limited to its time-averaged behavior. We tested human subjects in a visual estimation task in which they inferred the angular position of an unknown source from a noisy stimulus sequence. Introducing systematic temporal perturbations, i.e., breaks of different durations and at different positions in the otherwise regular evidence sequence, revealed that subjects actively compensated for the memory loss endured during the break by dynamically enhancing evidence integration and memory maintenance immediately after the break. We derived a new time-continuous Bayesian updating model that is dynamically constrained by optimal performance-effort trade-offs. With two free parameters determining the overall resource efficiencies of encoding and memory maintenance, the model accurately predicts the rich dependencies of subjects' accumulation behavior on the evidence schedule, including subjects' individual tendencies to emphasize either early (primacy) or late (recency) samples in the evidence sequence. Our results suggest that evidence accumulation is a non-stationary, dynamically controlled process that optimally balances the information gained from incoming evidence against the cognitive effort required to acquire and maintain it. The proposed model is general and should apply broadly across many task domains.
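The primacy/recency tendencies mentioned above are commonly linked to lossy integration. The sketch below is a generic leaky accumulator, not the authors' resource-rational model, showing how a leak parameter turns the equal sample weighting of ideal integration into recency weighting.

```python
def sample_weights(lam, n):
    # For the update a <- lam * a + x_t applied for t = 0..n-1, the final
    # accumulator value weights sample t by lam ** (n - 1 - t).
    return [lam ** (n - 1 - t) for t in range(n)]

ideal = sample_weights(1.0, 5)   # lossless integration: all samples equal
leaky = sample_weights(0.8, 5)   # leak: late samples dominate (recency)
```

With lam = 1 every sample contributes equally; with lam < 1 the earliest sample is discounted by lam raised to the sequence length, which is one simple way a memory-limited accumulator produces the recency bias described in the abstract.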

15
Is metabolism spatially optimized? Structural modeling of consecutive enzyme pairs reveals no evidence for spatial optimization of catalytic site proximity.

Algorta, J.; Walther, D.

2026-03-26 bioinformatics 10.64898/2026.03.24.713955 medRxiv
Top 0.1%
38.1%

Metabolic pathways are often hypothesized to benefit from the spatial organization of enzymes, facilitating substrate transfer through mechanisms such as metabolic channeling or metabolon formation. However, it remains unclear whether the spatial proximity of catalytic sites represents a general organizational principle of metabolism or is restricted to specific pathways. Here, we investigate whether consecutive enzymes in metabolic pathways, when physically interacting, exhibit structurally optimized arrangements that minimize distances between their catalytic sites, thereby increasing metabolite transfer efficiency from one enzyme to the next. We first evaluated the ability of current protein-protein interaction prediction methods, including AlphaFold2, AlphaFold3, ESMFold, and HDOCK, to model weak and transient interactions using a benchmark dataset of 112 low-affinity protein dimers from PDBbind. AlphaFold-based approaches performed best in recovering correct interaction geometries, while ESMFold showed limited performance. We further assessed several confidence metrics and identified ipTM, ipSAE, and VoroIF-GNN as the most informative predictors of correct interaction conformations. In addition to simple Euclidean distance metrics, we developed a computational procedure to estimate shortest accessible space paths between catalytic sites in predicted enzyme-enzyme complexes. Applying this framework to 107 consecutive enzyme pairs in E. coli revealed an increased tendency for consecutive enzymes to interact, but no systematic evidence that interacting enzymes position their catalytic sites in spatially optimized configurations. In the predicted complex conformations, catalytic sites tend not to be positioned closer than expected at random. The developed computational workflow provides a general framework for analyzing structural aspects of metabolic organization.
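The idea of a "shortest accessible space path" between catalytic sites, as opposed to the straight-line Euclidean distance, can be illustrated on a toy voxel grid. The BFS sketch below is a stand-in for the paper's actual procedure, with a hypothetical blocking wall and a single open passage.

```python
import math
from collections import deque

SIZE = 9
# A solid wall at x == 4 with one open voxel at (4, 8, 8) forces a detour,
# the way a protein body can block the direct line between two active sites.
wall = {(4, y, z) for y in range(SIZE) for z in range(SIZE)} - {(4, 8, 8)}
start, goal = (0, 4, 4), (8, 4, 4)

def bfs_path_length(src, dst):
    # Breadth-first search over empty voxels: returns the number of unit
    # steps on the shortest accessible path, or None if dst is walled off.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        p = queue.popleft()
        if p == dst:
            return dist[p]
        px, py, pz = p
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            q = (px + dx, py + dy, pz + dz)
            if all(0 <= c < SIZE for c in q) and q not in wall and q not in dist:
                dist[q] = dist[p] + 1
                queue.append(q)
    return None

accessible = bfs_path_length(start, goal)
euclidean = math.dist(start, goal)
```

Here the accessible path (24 unit steps around the wall) is three times the Euclidean distance (8.0 straight through it), which is why a path-based metric can reach different conclusions than a simple site-to-site distance.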

16
Detecting context-dependent selection on cancer driver genes with DiffDriver

Zhou, J.; Zhang, Q.; Song, L.; He, X.; Zhao, S.

2026-04-09 genomics 10.64898/2026.04.06.716771 medRxiv
Top 0.1%
37.9%

Positive selection on somatic mutations is the driving force for cancer progression. Growing evidence shows that the emergence of a driver mutation in a tumor sample depends on individual-specific factors, for example environmental exposures or the individual's germline genetic background. We term these individual-level factors the "contexts" of a tumor. Our hypothesis is that mutations in a driver gene can bring different growth advantages in different contexts, resulting in "differential selection" on these genes in varying contexts. Identifying which contexts modulate selection strength provides critical insights into the selection forces driving tumorigenesis. However, due to the sparsity of somatic mutations and heterogeneous background mutational process across positions and individuals, identification of differential selection has limited power with current statistical tools and is prone to false positives. To address this, we developed a powerful statistical method, DiffDriver, that identifies associations between "contexts" and selection strength on a driver gene across individuals. DiffDriver accounts for variations of mutation rates across bases and individuals, while taking advantage of functional information of sequences to improve the power. Through simulations, we show DiffDriver reduces false positives and boosts power compared to current methods. Our results highlight that multiple individual-level factors create significant heterogeneity in the strength of selection acting on driver genes and that 33% of driver genes showed differential selection in at least one of the contexts studied, including tumor clinical traits and tumor immune microenvironment subtypes. These results provide new insights into the context-dependent forces driving cancer evolution.
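The core idea of differential selection (a driver gene mutated at different rates in different patient contexts) can be sketched with a plain binomial likelihood-ratio test. This is a toy stand-in, not DiffDriver, and the counts are invented for illustration; the actual method additionally models per-base and per-individual background mutation rates.

```python
import math

def binom_ll(k, n, p):
    # Binomial log-likelihood (up to the constant combinatorial term).
    return k * math.log(p) + (n - k) * math.log(1 - p)

def lrt(k_a, n_a, k_b, n_b):
    # 2 * (log-lik with separate rates - log-lik with one shared rate);
    # compare to the chi-square(1 df) threshold 3.84 at the 5% level.
    p_a, p_b = k_a / n_a, k_b / n_b
    p0 = (k_a + k_b) / (n_a + n_b)   # shared rate under the null
    return 2 * (binom_ll(k_a, n_a, p_a) + binom_ll(k_b, n_b, p_b)
                - binom_ll(k_a, n_a, p0) - binom_ll(k_b, n_b, p0))

different = lrt(40, 200, 10, 200)   # 20% vs 5% of individuals mutated
same = lrt(25, 200, 25, 200)        # identical frequencies in both contexts
```

The unequal counts give a statistic far above 3.84, while identical counts give exactly zero; the hard part that DiffDriver addresses is keeping this comparison calibrated when background mutation rates themselves vary across positions and individuals.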

17
Efficient task generalization and humanlike face perception in models that learn to discriminate face geometry

Lee, S.; Ying, Z.; Dey, A.; Jeon, Y.-N.; Issa, E. B.

2026-02-03 neuroscience 10.64898/2026.01.31.703048 medRxiv
Top 0.1%
37.7%

Artificial deep neural networks (DNNs) can excel at face recognition from 2D photographs where both shape and appearance cues abound; however, DNNs have rarely been challenged to recognize faces strictly based on face geometry. Here, we show that DNNs, even those fine-tuned on face photographs, showed almost no generalization to a new geometry-based face task, while in the opposite direction, networks fine-tuned only on geometrically defined, textureless faces readily generalized to textured faces. To learn geometry in a more practical setting with colored and textured faces, we trained networks to discriminate face emotion in addition to face identity, which reduced texture bias and generalized well across face tasks. Learning in this way from just four individuals and their expressions generalized to unseen individuals, even exceeding standard models that are trained to classify hundreds of face identities. Compared to standard models, emotion and identity trained models developed more humanlike errors in the identities or emotions that they confused. This novel method learns in a humanlike manner using only a few individuals but enriched with expressions that widely vary face geometry, similar to early human experience during child-parent interactions. Thus, this bioinspired work has broad implications for how moving toward humanlike learning of geometry in artificial vision can be both highly sample efficient and highly performing.

18
Preventing Data Leakage in Neural Decoding

Wong, R.; Zhu, S. I.; McCullough, M. H.; Goodhill, G. J.

2026-01-27 neuroscience 10.64898/2026.01.26.701583 medRxiv
Top 0.1%
37.4%

Neural decoding is a widely used machine learning technique for investigating how behavior, perception and cognition are represented in neural activity. However, without careful application, data leakage can occur, where information from the test set contaminates the training set, leading to biased estimates of decoding performance and potentially invalidating biological conclusions. Here we use simulated and biological datasets to demonstrate how both supervised and unsupervised data preprocessing, including dimensionality reduction, can introduce leakage in neural decoding studies. We reveal that in some cases leakage can paradoxically decrease decoding performance relative to unbiased estimates, and we provide theoretical analyses explaining how this occurs. We demonstrate that, for autocorrelated neural time series, standard k-fold cross-validation can dramatically overstate performance. Finally, we provide detailed recommendations for avoiding data leakage in neural decoding.
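The k-fold pitfall for autocorrelated time series can be demonstrated with a toy decoder. In the sketch below, the features are pure autocorrelated noise carrying no label information at all, yet interleaved folds (a stand-in for shuffled k-fold) score far above chance because each test trial's temporal neighbors sit in the training set; contiguous held-out blocks stay near chance. All settings are hypothetical, not from the paper.

```python
import math
import random

random.seed(1)
n, d, k = 600, 5, 5

# Labels arrive in slow blocks, as behavioral states often do; features are
# pure AR(1) noise with NO dependence on the label, so true decodability is nil.
labels = [(t // 60) % 2 for t in range(n)]
x, state = [], [0.0] * d
for t in range(n):
    state = [0.99 * s + random.gauss(0.0, 0.1) for s in state]
    x.append(list(state))

def cv_accuracy(folds):
    # 1-nearest-neighbour decoding, evaluated fold by fold.
    correct = total = 0
    for fold in folds:
        test = set(fold)
        train = [i for i in range(n) if i not in test]
        for i in fold:
            j = min(train, key=lambda m: math.dist(x[m], x[i]))
            correct += labels[j] == labels[i]
            total += 1
    return correct / total

interleaved = [list(range(f, n, k)) for f in range(k)]   # shuffled-k-fold analogue
blocked = [list(range(f * (n // k), (f + 1) * (n // k))) for f in range(k)]

leaky_acc = cv_accuracy(interleaved)   # temporal neighbours leak into training
honest_acc = cv_accuracy(blocked)      # contiguous blocks limit the leak
```

Because each trial's nearest neighbour in feature space is almost always its temporal neighbour, interleaved folds effectively let the decoder memorize the noise, which is exactly the overstated-performance failure mode the abstract warns about.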

19
Simplified model of intrinsically bursting neurons

Bhattasali, N.; Pinto, L.; Lindsay, G. W.

2026-03-05 neuroscience 10.64898/2026.03.03.709454 medRxiv
Top 0.1%
36.9%

Rhythmic neural activity underlies essential biological functions such as locomotion, breathing, and feeding. Computational models are widely used to study how such rhythms emerge from interactions between neuron-level and circuit-level dynamics. Intrinsically bursting neurons are key components of many central pattern generators (CPGs), yet existing models span a tradeoff between biological realism and practical usability. Biophysical models involve many parameters that are difficult to tune, whereas abstract models often integrate poorly into neural circuit simulations. We propose a simplified model of intrinsically bursting neurons derived from a reduced non-spiking biophysical formulation. The model integrates readily into neural circuits while enabling direct and independent control of bursting characteristics, including duration, amplitude, and shape. We show that the model reproduces single-unit biophysical responses to diverse stimuli as well as circuit-level activity patterns from crustacean and mammalian CPGs. This model provides a practical tool for studying rhythm generation in neural circuits.
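For contrast with the simplified model proposed here, the classic Hindmarsh-Rose equations (a standard textbook burster, not the authors' model) show the fast-slow structure that makes biophysical bursters hard to tune: spikes ride on a slow adaptation variable, and burst duration, amplitude, and shape all depend jointly on several interacting parameters.

```python
# Hindmarsh-Rose with standard parameters (b=3, d=5, r=0.006, s=4, x_r=-1.6),
# integrated with forward Euler. x is the fast membrane variable.
x, y, z = -1.6, 0.0, 2.0
dt, I = 0.01, 2.0
xs = []
for _ in range(300_000):                   # 3000 time units
    dx = y - x**3 + 3.0*x**2 - z + I       # fast spiking dynamics
    dy = 1.0 - 5.0*x**2 - y                # fast recovery variable
    dz = 0.006 * (4.0 * (x + 1.6) - z)     # slow adaptation: paces the bursts
    x, y, z = x + dt*dx, y + dt*dy, z + dt*dz
    xs.append(x)

# Upward crossings of x = 1.0 count individual spikes within bursts.
spikes = sum(1 for a, b in zip(xs, xs[1:]) if a < 1.0 <= b)
```

Note how none of the three equations exposes a knob that maps directly onto "burst duration" or "burst amplitude"; a model offering direct, independent control of those characteristics, as the abstract proposes, avoids exactly this indirect tuning problem.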

20
Explaining temporally clustered errors with an autocorrelated Drift Diffusion Model

Vloeberghs, R.; Tuerlinckx, F.; Urai, A. E.; Desender, K.

2026-03-23 neuroscience 10.64898/2026.03.20.713186 medRxiv
Top 0.1%
35.0%

A widely used framework for studying the computational mechanisms of decision making is the Drift Diffusion Model (DDM). To account for the presence of both fast and slow errors in empirical data, the DDM incorporates across-trial variability in parameters such as the drift rate and the starting point. Although these variability parameters enable the model to reproduce both fast and slow errors, they rely on the assumption that over trials each parameter is independently sampled. As a result, the DDM effectively predicts that errors, whether fast or slow, occur randomly over time. However, in empirical data this assumption is violated, as error responses are often temporally clustered. To address this limitation, we introduce the autocorrelated DDM, in which trial-to-trial fluctuations in drift rate, starting point, and boundary evolve according to first-order autoregressive (AR1) processes. Using simulations, we demonstrate that, unlike the across-trial variability DDM, the autocorrelated DDM naturally accounts for temporal clustering of errors. We further show that model parameters can be reliably recovered using Amortized Bayesian Inference, even with as few as 500 trials. Finally, fits to empirical data indicate that the autocorrelated DDM provides the best account of error clustering, highlighting that computational parameters fluctuate over time, despite typically being estimated as fixed across trials.
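The clustering mechanism can be sketched without fitting anything: sample each trial's drift either from an AR(1) process or from an iid distribution with the same stationary variance, draw the choice from the standard DDM hit probability, and compare the lag-1 autocorrelation of the resulting error sequence. All parameter values below are illustrative, not fitted.

```python
import math
import random

random.seed(0)
n = 20_000
a, sigma, v_bar, sd_v, phi = 1.0, 1.0, 0.5, 0.5, 0.95

def p_upper(v):
    # Exact upper-bound hit probability for a symmetric DDM
    # (bounds at +/- a, unbiased start): 1 / (1 + exp(-2 a v / sigma^2)).
    return 1.0 / (1.0 + math.exp(-2.0 * a * v / sigma**2))

def simulate(autocorrelated):
    # Error = hitting the lower bound when the mean drift v_bar > 0.
    innov = sd_v * math.sqrt(1 - phi**2)   # matches the iid stationary variance
    v, errors = v_bar, []
    for _ in range(n):
        if autocorrelated:
            v = v_bar + phi * (v - v_bar) + random.gauss(0.0, innov)
        else:
            v = v_bar + random.gauss(0.0, sd_v)   # classic across-trial variability
        errors.append(0 if random.random() < p_upper(v) else 1)
    return errors

def lag1_autocorr(z):
    m = sum(z) / len(z)
    var = sum((u - m) ** 2 for u in z) / len(z)
    cov = sum((z[i] - m) * (z[i + 1] - m) for i in range(len(z) - 1)) / (len(z) - 1)
    return cov / var

ar1 = lag1_autocorr(simulate(True))
iid = lag1_autocorr(simulate(False))
```

Slow AR(1) excursions of the drift below zero produce runs of error-prone trials, so the error sequence inherits positive autocorrelation, while the iid-variability model scatters its errors randomly over time, exactly the contrast the abstract describes.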